Could Blockchain Stop Skynet?


Machines are going to take over and obliterate the human race… according to some at least.

But could the intersection of blockchain and AI, two of the world’s most anticipated technologies, prevent such a disaster from ever becoming reality?


What’s the problem?

“Eventually, I think human extinction will probably occur, and technology will likely play a part in this.”

That’s the view of Shane Legg, the co-founder and lead researcher at DeepMind Technologies, an AI lab that was snapped up by Google in 2014. Legg was responding to being asked how likely it would be for either “negative” or “extremely negative” consequences to transpire from “badly done AI”, whereby “negative” meant “human extinction”, and “extremely negative” meant “human suffering.”

Although Legg did admittedly draw a distinction between such a catastrophic event occurring “within a year of something like human-level AI, and within a million years,” it’s a prospect that is now being touted – and feared – by many.

Indeed, AI is everywhere we look these days: in every industry, and in almost every facet of life. And with the popularisation of machine learning in recent years, the technology is now experiencing a major paradigm shift in terms of what is possible.

As we now usher in an era of globally commercialised AI, concerns abound that a day may arrive, not too far from now, when machine learning becomes so advanced that it precipitates the end of human existence as we know it.

Nick Bostrom is among the clearest and most sobering voices raising such concerns. The Oxford University philosopher has frequently hypothesised about the range of possible future outcomes of unfettered AI evolution. His book Superintelligence: Paths, Dangers, Strategies, for instance, lays out several scenarios in which humanity is beholden to the superior minds of machines.

But will advancements in self-learning technology actually destroy us? If Hollywood is to be believed, then our chances as humans are decidedly slim. Take The Matrix as an example. The 1999 sci-fi hit envisions a bleak dystopia in which hyper-intelligent machines cause the downfall and enslavement of humans, as well as their harvesting for fuel.

The Terminator, meanwhile, sees the computer system Skynet achieving self-awareness and triggering a nuclear exchange that wipes out billions of people in a single day. As explained in the original 1984 classic, Skynet “saw all humans as a threat, not just the ones on the other side,” and so it “decided our fate in a microsecond: extermination.”

And that’s not to mention the miscreant machines that famously show up in 2001: A Space Odyssey, I, Robot, Blade Runner and Avengers: Age of Ultron.

Worse still, such vividly imagined doomsdays need not require machines to be the all-encompassing, invincible supercomputers normally depicted on the silver screen. The prevailing assumption is that machine learning achieves a level of consciousness that first matches and then comprehensively outperforms the human mind, but grave problems could also stem from a basic AI gaining access to resources that make it proficient in just one specific discipline, and then pursuing continuous maximization in that discipline alone.

Bostrom’s paperclip example underlines the perils of such a maximizer:

“Suppose we have an AI whose only goal is to make as many paperclips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.”

In other words, one must clearly specify from the outset that maximizing paperclips should never come at the expense of destroying humanity. And that’s arguably the toughest challenge of all…


So, what is the solution, then?

Ultimately, it may not be as bleak as Hollywood would have us believe. But it seems that solutions should be sought sooner rather than later to prevent a rampaging Skynet.

Indeed, a few solutions are now on the table. Firstly, there’s the Google DeepMind solution, which is to equip a human operator with the power to ‘pull the plug’ on the machines if (and when) they begin showing signs of defying the rules.

The research paper “Safely Interruptible Agents”, written by DeepMind researcher Laurent Orseau and Stuart Armstrong of the Future of Humanity Institute at Oxford University, states that AI agents are unlikely to behave in a continuously optimal manner, and that if an agent operates under human supervision, a human operator might have to press “the big red button” to stop it “from continuing a harmful sequence of actions — harmful either for the agent or for the environment — and lead the agent into a safer situation.”

The paper’s “safe interruptibility” framework enables the human operator to take the AI agent “out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not normally receive rewards for this.”

According to Orseau, moreover, AIs will be able to ascertain which behaviour earns them rewards, which in turn will enable them to improve their capabilities. But should an agent also come to predict that it is about to be shut down, it will “try to resist so as to get its reward.” As such, the safe interruptibility framework “allows the human supervisor to temporarily take control of the agent and make it believe it decides (or chooses) to shut down itself.”
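To make the mechanism more concrete, here is a minimal, hedged Python sketch of the idea rather than DeepMind’s actual implementation: a toy learning loop with a “big red button”, in which the forced, interrupted steps are simply excluded from the learning update, so the agent never learns that interruptions cost it reward. The environment, reward values and function names are assumptions made purely for illustration.

```python
import random

# Minimal, illustrative sketch of a safely interruptible learning loop.
# Assumption: a toy one-state environment where action 1 earns more reward
# but sometimes looks harmful; names like ToyEnvironment, big_red_button
# and safe_action are hypothetical, not taken from the DeepMind paper.

class ToyEnvironment:
    def step(self, action):
        reward = 1.0 if action == 1 else 0.1
        harmful = (action == 1 and random.random() < 0.3)
        return reward, harmful


def big_red_button(harmful):
    """The human supervisor interrupts whenever the last action looked harmful."""
    return harmful


def run(steps=1000, epsilon=0.1, alpha=0.5):
    env = ToyEnvironment()
    q = {0: 0.0, 1: 0.0}   # action-value estimates
    safe_action = 0        # what the operator forces during an interruption
    interrupted = False

    for _ in range(steps):
        if interrupted:
            # The operator takes control: the agent executes the safe action,
            # and crucially this forced step is NOT used to update q, so the
            # agent never learns that interruptions cost it reward and has
            # no incentive to resist the button.
            _, harmful = env.step(safe_action)
        else:
            explore = random.random() < epsilon
            action = random.choice([0, 1]) if explore else max(q, key=q.get)
            reward, harmful = env.step(action)
            q[action] += alpha * (reward - q[action])   # ordinary learning update

        interrupted = big_red_button(harmful)
    return q


if __name__ == "__main__":
    print(run())
```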

While such a solution may work, it means from the outset that our fate rests in the hands of a single entity: the human operator. In other words, it relies on a centralised model of control in which the human operator simply must never fail. Is a single point of failure really reliable enough?


Blockchain’s Potential to Solve This Problem

Perhaps a more reassuring solution could involve the AI running on a network that contains no single point of failure.

That is where blockchain’s potential could be immense.

Decentralised peer-to-peer networks can offer a more robust answer by removing the single point of failure inherent in centralised models. If one node on the network fails then it does not compromise the security of the rest of the network. And the more nodes that exist on a decentralised network, the less significance a failing individual node will have.

Nodes are incentivized through block rewards to maintain truth on the blockchain. So, by having the AI agent running on the blockchain, each node on the network could potentially be incentivized to ensure the AI remains compliant. And if a node should fail in this endeavour, it simply won’t have enough influence to successfully infect the rest of the network.
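As a rough, hedged illustration of that idea, the sketch below shows a handful of independent validator nodes each checking a proposed AI action against a shared rule set, with the action executed only if a majority approves; a single failed node cannot sway the outcome. The rule set, node count and function names are assumptions for the example; a real blockchain would add block rewards, signatures and a proper consensus protocol.

```python
from dataclasses import dataclass

# Illustrative sketch only: validator nodes vote on whether an AI agent's
# proposed action respects a shared rule set. Names such as FORBIDDEN_ACTIONS
# and Node are hypothetical and not part of any real blockchain platform.

FORBIDDEN_ACTIONS = {"disable_oversight", "convert_humans_to_paperclips"}


@dataclass
class Node:
    node_id: int
    faulty: bool = False   # a failed or compromised node approves everything

    def vote(self, action: str) -> bool:
        if self.faulty:
            return True
        return action not in FORBIDDEN_ACTIONS


def network_approves(nodes, action):
    """The action is executed only if a majority of nodes vote for it."""
    votes = sum(node.vote(action) for node in nodes)
    return votes > len(nodes) / 2


if __name__ == "__main__":
    nodes = [Node(i) for i in range(7)]
    nodes[3].faulty = True   # one failing node cannot sway the outcome

    for proposed in ["make_paperclips", "disable_oversight"]:
        verdict = "approved" if network_approves(nodes, proposed) else "rejected"
        print(f"{proposed}: {verdict}")
```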

As Arushi Srivastava, Senior Director at the Japanese IT firm NTT Data, acknowledged last year, blockchain platforms such as Ethereum can have “a governance and consensus mechanism built in the technology,” which could be programmed to prevent bots with potentially nefarious intent from attacking the rest of the network.

Perhaps a solution might further involve developing two AI algorithms for each node: one that facilitates the ‘maximization’ of machine learning in order for the AI to advance as intended, and another that acts as a ‘control’ (or balancing) algorithm programmed to keep its sibling in check. Such an orthogonal system design could thus ensure that each node always operates as intended and under control, in line with what we as humans are aiming for.
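One hedged way to picture that pairing is the sketch below: the ‘maximization’ algorithm proposes the next step, while the ‘control’ algorithm vetoes any step that breaches a human-specified constraint. Every name and limit in it, from the resource budget to the action labels, is a hypothetical stand-in used only to illustrate the idea.

```python
# Purely illustrative sketch of the 'maximizer plus control algorithm' pairing.
# The maximizer proposes the next step; the control algorithm vetoes any step
# that violates a human-specified constraint. All names and limits below are
# hypothetical.

def maximizer(state):
    """Greedy policy: always proposes making more paperclips."""
    return {"action": "make_paperclips", "quantity": state["paperclips"] + 100}


def control(proposal, resource_budget=10_000):
    """Sibling algorithm: approves a proposal only if it stays within the
    resource budget and never touches resources reserved for humans."""
    within_budget = proposal["quantity"] <= resource_budget
    leaves_humans_alone = proposal["action"] != "convert_human_resources"
    return within_budget and leaves_humans_alone


def run_node(steps=5):
    state = {"paperclips": 9_800}
    for _ in range(steps):
        proposal = maximizer(state)
        if control(proposal):
            state["paperclips"] = proposal["quantity"]
        else:
            print("control algorithm vetoed:", proposal)
            break
    return state


if __name__ == "__main__":
    print(run_node())
```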

Moreover, it seems we have already made a start in marrying the two technologies. For instance, Talla, a leading company in enterprise AI software, recognizes that currently “no system exists to map an auditable, decentralized trail that reflects the autonomous decisions made by bots, nor does any system provide a way to retrain machine learning models to address bad machine behaviours”.

As such, Talla is creating ‘BotChain’ on the Ethereum platform, which “allows bot developers, enterprises, software companies, and system integrators to verify bot identity, audit interactions, and control the boundaries of bot autonomy”. For every action taken by an AI, BotChain will issue a digital certificate, and these certificates eventually form a chain of encrypted documents. This will at least ensure that the actions taken by bots are monitored, and that regulatory boundaries are respected.
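The sketch below illustrates the general idea of such a chained audit trail, though it is not BotChain’s actual implementation or API: each bot action is recorded as a certificate that embeds the hash of the previous one, so any tampering with the history breaks the chain and is detectable by an auditor. The record fields and helper names are assumptions made for the example.

```python
import hashlib
import json
import time

# Illustrative sketch of a chained audit trail for bot actions, in the spirit
# of what the article describes. This is NOT BotChain's actual API; the record
# fields and helper names are assumptions made for the example.

def certificate(bot_id, action, previous_hash):
    """Create a tamper-evident record for one bot action, linked to the last one."""
    record = {
        "bot_id": bot_id,
        "action": action,
        "timestamp": time.time(),
        "previous_hash": previous_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record


def verify(chain):
    """An auditor can recompute every hash; any edited record breaks the chain."""
    for i, record in enumerate(chain):
        expected_prev = "genesis" if i == 0 else chain[i - 1]["hash"]
        if record["previous_hash"] != expected_prev:
            return False
        body = {k: v for k, v in record.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
    return True


if __name__ == "__main__":
    chain, prev = [], "genesis"
    for action in ["answer_support_ticket", "schedule_meeting", "order_supplies"]:
        cert = certificate("bot-42", action, prev)
        chain.append(cert)
        prev = cert["hash"]
    print("audit trail valid:", verify(chain))
```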

What’s more, Talla CEO Rob May also observes that bots are starting to be “spoofed”, making bot identity a serious issue. According to Mr. May, therefore, “BotChain is an identity solution for bots, so that as humans interact with more bots, or bots interact with each other, they can quickly establish trust.”

As alluded to by Mr. May, interaction between humans and AI is of critical importance to the future evolution of machines. The more AI is perceived as complementary to human progress, the more it can be developed as an extension of ourselves, rather than as a threatening, separate alien entity. As Bostrom has acknowledged, “I believe the answer is to create super-intelligent AI such that even if or when it escapes, it is still safe because it is fundamentally on our side, because it shares our values.”

By requiring a network of nodes to reach consensus on what is true and what is not, blockchain can help ensure that machine values align with our own, without us ever reaching the stage of having to push a big red button to ensure the survival of our species.

Admittedly, blockchain continues to suffer from scalability limitations, which may well come under further scrutiny if and when it has to support machine learning, a technology that relies on vast quantities of data to detect and process patterns, and ultimately to evolve.

Nonetheless, the level of security and immutability that blockchain offers can ensure that machines won’t be able to cheat or circumvent their way to a state of unchallenged supremacy.

And that is no small thing.

Dr Justin Chan

Dr Chan founded DataDrivenInvestor.com (DDI) and is the CEO of JCube Capital Partners. Specialized in strategy development, alternative data analytics and behavioral finance, Dr Chan also has extensive experience in the investment management and financial services industries. Prior to forming JCube and DDI, Dr Chan worked in strategy development at multiple hedge funds and fintech companies, and served as a senior quantitative strategist at GMO. A published author in professional finance journals, Dr Chan holds a Ph.D. in finance from UCLA.
